
    Counter-intuitive throughput behaviors in networks under end-to-end control

    It has been shown that as long as traffic sources adapt their rates to the aggregate congestion measure in their paths, they implicitly maximize a certain utility. In this paper we study some counter-intuitive throughput behaviors in such networks, pertaining to whether a fair allocation is always inefficient and whether increasing capacity always raises aggregate throughput. A bandwidth allocation policy can be defined in terms of a class of utility functions parameterized by a scalar $\alpha$ that can be interpreted as a quantitative measure of fairness. An allocation is fair if $\alpha$ is large and efficient if aggregate throughput is large. All examples in the literature suggest that a fair allocation is necessarily inefficient. We characterize exactly the tradeoff between fairness and throughput in general networks. The characterization allows us both to produce the first counter-example and to trivially explain all the previous supporting examples. Surprisingly, our counter-example has the property that a fairer allocation is always more efficient. In particular, it implies that max-min fairness can achieve a higher throughput than proportional fairness. Intuitively, we might expect that increasing link capacities always raises aggregate throughput. We show that not only can throughput be reduced when some link increases its capacity; more strikingly, it can also be reduced when all links increase their capacities by the same amount. If all links increase their capacities proportionally, however, throughput will indeed increase. These examples demonstrate the intricate interactions among sources in a network setting that are missing in a single-link topology.
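    As a point of reference (not stated in the abstract itself), the standard instance of such a parameterization is the $\alpha$-fair utility family of Mo and Walrand:

        U_\alpha(x) = \begin{cases} x^{1-\alpha}/(1-\alpha), & \alpha \geq 0,\ \alpha \neq 1, \\ \log x, & \alpha = 1, \end{cases}

    under which $\alpha = 0$ recovers throughput maximization, $\alpha = 1$ proportional fairness, and $\alpha \to \infty$ max-min fairness.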

    Understanding CHOKe: throughput and spatial characteristics

    A recently proposed active queue management scheme, CHOKe, is stateless, simple to implement, yet surprisingly effective in protecting TCP from UDP flows. We present an equilibrium model of TCP/CHOKe. We prove that, provided the number of TCP flows is large, the UDP bandwidth share peaks at $(e+1)^{-1} \approx 0.269$ when the UDP input rate is slightly larger than the link capacity, and drops to zero as the UDP input rate tends to infinity. We clarify the spatial characteristics of the leaky buffer under CHOKe that produce this throughput behavior. Specifically, we prove that, as the UDP input rate increases, even though the total number of UDP packets in the queue increases, their spatial distribution becomes more and more concentrated near the tail of the queue and drops rapidly to zero toward the head of the queue. In stark contrast to a non-leaky FIFO buffer, where the UDP bandwidth share would approach 1 as its input rate increases without bound, under CHOKe, UDP simultaneously maintains a large number of packets in the queue and receives a vanishingly small bandwidth share; this is the mechanism through which CHOKe protects TCP flows.
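    For readers unfamiliar with CHOKe, the core of its drop rule is easy to sketch. The following is a minimal Python illustration of the matching step only; the list-based queue, flow IDs, and max_len parameter are illustrative assumptions, and the real algorithm couples this step with RED thresholds and marking:

        import random

        def choke_enqueue(queue, flow_id, max_len):
            """Admit a packet under a minimal CHOKe-style rule: compare the
            arrival against one randomly chosen queued packet and drop both
            on a flow match. `queue` is a plain list of flow IDs; returns
            True if the arriving packet is queued."""
            if queue:
                victim = random.randrange(len(queue))
                if queue[victim] == flow_id:   # same flow: drop both packets
                    del queue[victim]
                    return False
            if len(queue) >= max_len:          # otherwise fall back to tail drop
                return False
            queue.append(flow_id)
            return True

    Because a high-rate flow occupies many queue slots, its arrivals match (and are dropped) with high probability, which is why its packets concentrate near the tail and its throughput share vanishes.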

    Modelling and stability of FAST TCP

    We introduce a discrete-time model of FAST TCP that fully captures the effect of self-clocking and compare it with the traditional continuous-time model. While the continuous-time model predicts instability for homogeneous sources sharing a single link when the feedback delay is large, experiments suggest otherwise. Using the discrete-time model, we prove that FAST TCP is locally asymptotically stable in general networks when all sources have a common round-trip feedback delay, no matter how large the delay is. We also prove global stability for a single bottleneck link in the absence of feedback delay. The techniques developed here are new and applicable to other protocols.
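    As background, the window update that such FAST TCP models analyze takes the following standard form; the alpha and gamma defaults below are illustrative choices, not values from this abstract:

        def fast_tcp_window_update(w, base_rtt, rtt, alpha=100.0, gamma=0.5):
            """One discrete-time FAST TCP window update: the window is pulled
            toward (base_rtt/rtt)*w + alpha, which at equilibrium keeps
            roughly `alpha` packets buffered in the path. gamma in (0, 1]
            damps the step."""
            return (1.0 - gamma) * w + gamma * ((base_rtt / rtt) * w + alpha)

    Iterating this map with the measured rtt fed back after a delay is precisely the discrete-time, self-clocked dynamic whose stability the paper establishes.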

    Network equilibrium of heterogeneous congestion control protocols

    When heterogeneous congestion control protocols that react to different pricing signals share the same network, the resulting equilibrium may no longer be interpreted as a solution to the standard utility maximization problem. We prove the existence of equilibrium under mild assumptions. Then we show that multi-protocol networks whose equilibria are locally non-unique or infinite in number can only form a set of measure zero. Multiple locally unique equilibria can arise in two ways. First, unlike in the single-protocol case, the set of bottleneck links can be non-unique with heterogeneous protocols even when the routing matrix has full row rank. The equilibria associated with different sets of bottleneck links are necessarily distinct. Second, even when there is a unique set of bottleneck links, the network equilibrium can still be non-unique, but the equilibria are always finite and odd in number. They cannot all be locally stable unless the equilibrium is globally unique. Finally, we provide various sufficient conditions for global uniqueness. Numerical examples are used throughout the paper to illustrate these results.
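    To make the notion of equilibrium concrete, here is a minimal single-link sketch: two sources react to different functions of the same congestion price p, and the equilibrium price balances aggregate demand against capacity. Both demand curves are invented for illustration and do not come from the paper:

        def demand_loss_based(p):
            """Illustrative demand of a loss-based source: rate falls as 1/sqrt(p)."""
            return p ** -0.5

        def demand_delay_based(p):
            """Illustrative demand of a delay-based source reacting to a
            different (here, saturating) transformation of the same price."""
            return 2.0 / (1.0 + p)

        def single_link_equilibrium(capacity, lo=1e-9, hi=1e9, iters=200):
            """Bisect on the price p until aggregate demand matches capacity.
            Both demands are decreasing in p, so bisection is valid here."""
            for _ in range(iters):
                mid = (lo * hi) ** 0.5     # bisect in log scale since p > 0
                if demand_loss_based(mid) + demand_delay_based(mid) > capacity:
                    lo = mid               # demand too high: raise the price
                else:
                    hi = mid
            return (lo * hi) ** 0.5

    In a general network the price is a vector over links and the demand maps need not be monotone in any common ordering, which is how the multiple equilibria studied in the paper can arise.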

    Cross-layer optimization in TCP/IP networks

    TCP-AQM can be interpreted as distributed primal-dual algorithms that maximize aggregate utility over source rates. We show that an equilibrium of TCP/IP, if it exists, maximizes aggregate utility over both source rates and routes, provided congestion prices are used as link costs. An equilibrium exists if and only if this utility maximization problem and its Lagrangian dual have no duality gap. In this case, TCP/IP incurs no penalty in not splitting traffic across multiple paths. Such an equilibrium, however, can be unstable. It can be stabilized by adding a static component to the link cost, but at the expense of reduced utility in equilibrium. If link capacities are optimally provisioned, however, pure static routing, which is necessarily stable, is sufficient to maximize utility. Moreover, single-path routing again achieves the same utility as multipath routing at optimality.
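    The primal-dual interpretation in this abstract rests on a simple link-price dynamic. A generic sketch of the dual (subgradient) step, with an illustrative step size gamma not taken from the paper:

        def link_price_update(p, aggregate_rate, capacity, gamma=0.01):
            """One dual step of the TCP-AQM price dynamic: the congestion
            price p rises while the link is overloaded and decays, clipped
            at zero, while it is underutilized."""
            return max(0.0, p + gamma * (aggregate_rate - capacity))

    Using these prices as link costs in the routing layer is what couples the two layers: sources maximize utility given routes, while shortest-price routing re-optimizes routes given the prices.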

    Hysteretic performance research on high strength circular concrete-filled thin-walled steel tubular columns

    Under violent earthquake motions, severe damage in critical regions of structures can be ascribed to cumulative damage caused by cyclic loading. Using high strength (HS) materials in concrete-filled steel tubular (CFST) columns is an effective way, and a popular tendency, to improve seismic behavior in anti-seismic design. In this paper, an experimental study on the hysteretic performance of high strength circular concrete-filled thin-walled steel tubular (HCFTST) columns was carried out. A total of six specimens were tested under constant axial compression combined with cyclic lateral loading. The test parameters were different combinations of the diameter-to-thickness (D/t) ratio, the axial compression ratio (n), and the concrete cylinder compressive strength (fc). The failure modes, load-displacement hysteretic curves, skeleton curves, dissipated energy, and stiffness degradation were examined in detail. The experimental results indicate that the ultimate limit state is reached when severe local buckling and rupture of the steel tubes occur, accompanied by crushing of the core concrete. Using high strength materials provides a larger elastic deformation capacity, and a higher axial compression ratio, within the test scope, can mobilize the potential of the HS materials. In brief, reasonably designed HCFTST columns with ultra-large D/t ratios can exhibit excellent hysteretic performance and can be widely applied in earthquake-prone regions.

    Wang, J.; Sun, Q. (2018). Hysteretic performance research on high strength circular concrete-filled thin-walled steel tubular columns. In Proceedings of the 12th International Conference on Advances in Steel-Concrete Composite Structures (ASCCS 2018). Editorial Universitat Politècnica de València, 717-723. https://doi.org/10.4995/ASCCS2018.2018.7287

    Noisy Computing of the OR and MAX Functions

    We consider the problem of computing a function of $n$ variables using noisy queries, where each query is incorrect with some fixed and known probability $p \in (0, 1/2)$. Specifically, we consider the computation of the $\mathsf{OR}$ function of $n$ bits (where queries correspond to noisy readings of the bits) and the $\mathsf{MAX}$ function of $n$ real numbers (where queries correspond to noisy pairwise comparisons). We show that an expected number of queries of $(1 \pm o(1)) \frac{n \log \frac{1}{\delta}}{D_{\mathsf{KL}}(p \| 1-p)}$ is both sufficient and necessary to compute both functions with a vanishing error probability $\delta = o(1)$, where $D_{\mathsf{KL}}(p \| 1-p)$ denotes the Kullback-Leibler divergence between the $\mathsf{Bern}(p)$ and $\mathsf{Bern}(1-p)$ distributions. Compared to previous work, our results tighten the dependence on $p$ in both the upper and lower bounds for the two functions.
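    To see where the scale of the bound comes from, here is a naive repetition-based sketch for the noisy OR. This is not the paper's optimal adaptive algorithm; the per-bit repetition count is an illustrative choice based on the KL term in the bound, and it spends a log(n) factor more queries than the stated optimum:

        import math
        import random

        def kl_bern(p):
            """D_KL(Bern(p) || Bern(1-p)), the divergence in the bound."""
            return p * math.log(p / (1 - p)) + (1 - p) * math.log((1 - p) / p)

        def noisy_query(bit, p):
            """One noisy reading: the true bit is flipped with probability p."""
            return bit if random.random() >= p else 1 - bit

        def noisy_or(bits, p, delta):
            """Estimate OR(bits) by majority vote over repeated noisy reads.
            A naive union-bound scheme with ~log(n/delta)/D_KL reads per bit;
            matching the paper's n*log(1/delta)/D_KL total requires an
            adaptive query strategy."""
            reps = math.ceil(math.log(len(bits) / delta) / kl_bern(p))
            for b in bits:
                votes = sum(noisy_query(b, p) for _ in range(reps))
                if 2 * votes > reps:       # majority says this bit is 1
                    return 1
            return 0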